28 research outputs found

    Accurately approximating algebraic tomographic reconstruction by filtered backprojection

    In computed tomography, algebraic reconstruction methods tend to produce reconstructions with higher quality than analytical methods when presented with limited and noisy projection data. The high computational requirements of algebraic methods, however, limit their usefulness in practice. In this paper, we propose a method to approximate the algebraic SIRT method by the computationally efficient filtered backprojection method. The method is based on an efficient way of computing a special angle-dependent convolution filter for filtered backprojection. Using this method, a reconstruction quality that is similar to SIRT can be achieved by existing efficient implementations of the filtered backprojection method. Results for a phantom image show that the method is indeed able to produce reconstructions with a quality similar to algebraic methods when presented with limited and noisy projection data.
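
    As a rough illustration of the underlying idea, the sketch below shows a plain NumPy filtered backprojection in which the filter is an explicit argument, so that a computed approximating filter could be substituted for the standard ramp. The fbp_with_filter helper and the use of a single angle-independent filter are simplifications for illustration; the paper's angle-dependent SIRT-approximating filter is not reproduced here.

```python
# Minimal FBP sketch with the filter passed in explicitly, so an
# SIRT-approximating filter could replace the standard ramp below.
import numpy as np

def fbp_with_filter(sinogram, angles_deg, filt):
    """sinogram: (n_angles, n_det); filt: (n_det,) frequency-domain filter."""
    n_angles, n_det = sinogram.shape
    # Filter each projection in the Fourier domain.
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))
    # Backproject onto an n_det x n_det pixel grid.
    grid = np.arange(n_det) - n_det / 2
    xx, yy = np.meshgrid(grid, grid)
    recon = np.zeros((n_det, n_det))
    for p, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2
        recon += np.interp(t, np.arange(n_det), p, left=0.0, right=0.0)
    return recon * np.pi / (2 * n_angles)

# Standard ramp filter; an approximating method would substitute its own
# computed filter here (possibly a different one per angle).
ramp = np.abs(np.fft.fftfreq(128))
sino = np.random.rand(60, 128)
recon = fbp_with_filter(sino, np.linspace(0, 180, 60, False), ramp)
```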

    A medium-grain method for fast 2D bipartitioning of sparse matrices

    We present a new hypergraph-based method, the medium-grain method, for solving the sparse matrix partitioning problem. This problem arises when distributing data for parallel sparse matrix-vector multiplication. In the medium-grain method, each matrix nonzero is assigned to either a row group or a column group, and these groups are represented by vertices of the hypergraph. For an m × n sparse matrix, the resulting hypergraph has m + n vertices and m + n hyperedges. Furthermore, we present an iterative refinement procedure for improvement of a given partitioning, based on the medium-grain method, which can be applied as a cheap but effective postprocessing step after any partitioning method. The medium-grain method is able to produce fully two-dimensional bipartitionings, but its computational complexity equals that of one-dimensional methods. Experimental results for a large set of sparse test matrices show that the medium-grain method with iterative refinement produces bipartitionings with lower communication volume compared to current state-of-the-art methods, and is faster at producing them.
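
    A much-simplified sketch of the vertex-assignment step is given below: each nonzero (i, j) is assigned to either a row-group or a column-group vertex, and the m + n row and column nets are built over those vertices. The tie-breaking heuristic and the data structures are illustrative assumptions, and the subsequent hypergraph bipartitioning (e.g. with a partitioner such as Mondriaan or PaToH) is not shown.

```python
# Simplified medium-grain-style construction: assign each nonzero to a row
# or column group vertex, then build one net per matrix row and column.
import numpy as np
from scipy.sparse import random as sparse_random

A = sparse_random(6, 8, density=0.3, format="coo", random_state=0)
row_nnz = np.bincount(A.row, minlength=A.shape[0])
col_nnz = np.bincount(A.col, minlength=A.shape[1])

# m + n vertices: ("r", i) for row groups, ("c", j) for column groups.
# The "fewest nonzeros wins" rule below is a simplification for illustration.
assignment = {}
for i, j in zip(A.row, A.col):
    assignment[(i, j)] = ("r", i) if row_nnz[i] <= col_nnz[j] else ("c", j)

# m + n hyperedges: row net i connects every vertex owning a nonzero of row i,
# column net j connects every vertex owning a nonzero of column j.
nets = {("row", i): set() for i in range(A.shape[0])}
nets.update({("col", j): set() for j in range(A.shape[1])})
for (i, j), v in assignment.items():
    nets[("row", i)].add(v)
    nets[("col", j)].add(v)
```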

    Foam-like phantoms for comparing tomography algorithms

    Tomographic algorithms are often compared by evaluating them on certain benchmark datasets. For fair comparison, these datasets should ideally (i) be challenging to reconstruct, (ii) be representative of typical tomographic experiments, (iii) be flexible to allow for different acquisition modes, and (iv) include enough samples to allow for comparison of data-driven algorithms. Current approaches often satisfy only some of these requirements, but not all. For example, real-world datasets are typically challenging and representative of a category of experimental examples, but are restricted to the acquisition mode that was used in the experiment and are often limited in the number of samples. Mathematical phantoms are often flexible and can sometimes produce enough samples for data-driven approaches, but can be relatively easy to reconstruct and are often not representative of typical scanned objects. In this paper, we present a family of foam-like mathematical phantoms.

    Real-time segmentation for tomographic imaging

    In tomography, reconstruction and analysis are often performed only once the acquisition has been completed, due to the computational cost of the 3D imaging algorithms. In contrast, real-time reconstruction and analysis can avoid costly repetition of experiments and enable optimization of experimental parameters. Recently, it was shown that by reconstructing a subset of arbitrarily oriented slices, real-time quasi-3D reconstruction can be attained. Here, we extend this approach by including real-time segmentation, thereby enabling real-time analysis during the experiment. We propose to use a convolutional neural network (CNN) to perform real-time image segmentation and introduce an adapted training strategy in order to apply CNNs to arbitrarily oriented slices. We evaluate our method on both simulated and real-world data. The experiments show that our approach enables real-time tomographic segmentation for real-world applications and outperforms standard unsupervised segmentation methods.
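
    The sketch below illustrates the two ingredients in simplified form: sampling an arbitrarily oriented slice from a volume (to build training pairs) and applying a small CNN to such a slice. The oriented_slice helper and the toy network are illustrative assumptions, not the training procedure or architecture used in the paper.

```python
# Hedged sketch: extract an arbitrarily oriented slice from a 3D volume and
# run a small placeholder segmentation CNN on it.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import map_coordinates

def oriented_slice(volume, center, u, v, size=64):
    """Sample a size x size slice spanned by unit vectors u, v around center."""
    s = np.arange(size) - size / 2
    uu, vv = np.meshgrid(s, s)
    coords = (np.asarray(center, dtype=float)[:, None, None]
              + uu * np.asarray(u)[:, None, None]
              + vv * np.asarray(v)[:, None, None])
    return map_coordinates(volume, coords, order=1)

vol = np.random.rand(64, 64, 64).astype(np.float32)
sl = oriented_slice(vol, center=(32, 32, 32), u=(1, 0, 0), v=(0, 0.6, 0.8))

net = nn.Sequential(                       # toy two-class segmentation network
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 2, 1))
logits = net(torch.from_numpy(sl)[None, None])  # shape (1, 2, 64, 64)
labels = logits.argmax(dim=1)                   # per-pixel class prediction
```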

    Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
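
    A minimal sketch of how such a combined reconstruction call can look from a TomoPy script is given below; the synthetic projection data and parameter values are placeholders, and an ASTRA installation with GPU support is assumed.

```python
# Illustrative sketch of dispatching a TomoPy reconstruction to the ASTRA
# toolbox via tomopy.recon(); data and parameter values are placeholders.
import numpy as np
import tomopy

proj = np.random.rand(180, 1, 256).astype(np.float32)   # (angles, slices, det)
theta = tomopy.angles(proj.shape[0])                     # projection angles

# Standard TomoPy reconstruction (CPU-based gridrec):
rec_gridrec = tomopy.recon(proj, theta, algorithm='gridrec')

# Same call, but dispatched to the ASTRA toolbox's GPU-based SIRT:
options = {'proj_type': 'cuda', 'method': 'SIRT_CUDA', 'num_iter': 100}
rec_sirt = tomopy.recon(proj, theta, algorithm=tomopy.astra, options=options)
```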

    Improving reproducibility in synchrotron tomography using implementation-adapted filters

    For fast reconstruction of large tomographic datasets, filtered backprojection-type or Fourier-based algorithms are still the method of choice, as they have been for decades. These robust and computationally efficient algorithms have been integrated into a broad range of software packages. The continuous mathematical formulas used for image reconstruction in such algorithms are unambiguous. However, variations in discretization and interpolation result in quantitative differences between reconstructed images, and corresponding segmentations, obtained from different software. This hinders reproducibility of experimental results, making it difficult to ensure that results and conclusions from experiments can be reproduced at different facilities or using different software. In this paper, a way to reduce such differences by optimizing the filter used in analytical algorithms is proposed. These filters can be computed using a wrapper routine around a black-box implementation of a reconstruction algorithm, and lead to quantitatively similar reconstructions. Use cases for this approach are demonstrated by computing implementation-adapted filters for several open-source implementations and applying them to simulated phantoms and real-world data acquired at the synchrotron. Our contribution to a reproducible reconstruction step forms a building block towards a fully reproducible synchrotron tomography data processing pipeline.
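
    One plausible way to realise such a wrapper, sketched below, exploits the fact that the reconstruction is linear in the filter: the black-box implementation is run once per basis filter, and a least-squares problem yields the filter coefficients whose combination best matches a reference reconstruction. The fit_filter helper, the choice of basis and the reference target are illustrative assumptions; fbp stands for any filter-parameterized FBP routine.

```python
# Hedged sketch of computing an implementation-adapted filter by least squares
# around a black-box, filter-parameterized FBP routine `fbp(sino, angles, f)`.
import numpy as np

def fit_filter(sinogram, angles, reference, basis_filters, fbp):
    # Reconstruction is linear in the filter, so stack one reconstruction
    # per basis filter as columns of a matrix...
    recs = np.stack([fbp(sinogram, angles, f).ravel() for f in basis_filters],
                    axis=1)
    # ...and solve min_c || recs @ c - reference ||_2 for the coefficients.
    coeffs, *_ = np.linalg.lstsq(recs, reference.ravel(), rcond=None)
    # The adapted filter is the corresponding combination of basis filters.
    return np.tensordot(coeffs, np.asarray(basis_filters), axes=1)
```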

    Real-time reconstruction and visualisation towards dynamic feedback control during time-resolved tomography experiments at TOMCAT

    Tomographic X-ray microscopy beamlines at synchrotron light sources worldwide have pushed the achievable time-resolution for dynamic 3-dimensional structural investigations down to a fraction of a second, allowing the study of quickly evolving systems. The large data rates involved impose heavy demands on computational resources, making it difficult to readily process and interrogate the resulting volumes. The data acquisition is thus performed essentially blindly. Such a sequential process makes it hard to notice problems with the measurement protocol or sample conditions, potentially rendering the acquired data unusable, and it keeps the user from optimizing the experimental parameters of the imaging task at hand. We present an efficient approach to address this issue based on real-time reconstruction, visualisation and on-the-fly analysis.

    Task-driven learned hyperspectral data reduction using end-to-end supervised deep learning

    An important challenge in hyperspectral imaging tasks is to cope with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the specific features relevant to the particular imaging task, but applying them directly to the full spectral input data is computationally expensive. We propose a novel supervised deep learning approach that combines data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that the image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods, and can be used in a wide range of problem settings. The integration of knowledge about the task allows for stronger data reduction and higher accuracy compared with standard data reduction methods.
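
    The sketch below shows a toy PyTorch stand-in for the reduce-then-analyse idea: a learned 1 × 1 convolution compresses the spectral bins to a few channels and a small task network operates on the reduced image, with both parts trained jointly. Layer sizes and the task network are illustrative and do not correspond to the architectures evaluated in the paper.

```python
# Toy end-to-end "reduce then analyse" architecture: a learned spectral
# reduction layer followed by a small task CNN, trained jointly.
import torch
import torch.nn as nn

class DataReductionCNN(nn.Module):
    def __init__(self, n_bins=200, n_reduced=2, n_classes=2):
        super().__init__()
        # Learned spectral reduction: n_bins channels -> n_reduced channels.
        self.reduce = nn.Conv2d(n_bins, n_reduced, kernel_size=1)
        # Task network operating on the reduced image (toy stand-in).
        self.task = nn.Sequential(
            nn.Conv2d(n_reduced, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1))

    def forward(self, x):          # x: (batch, n_bins, H, W)
        return self.task(self.reduce(x))

model = DataReductionCNN()
out = model(torch.rand(1, 200, 64, 64))   # -> (1, n_classes, 64, 64)
```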

    A tomographic workflow to enable deep learning for X-ray based foreign object detection

    Detection of unwanted (‘foreign’) objects within products is a common procedure in many branches of industry for maintaining production quality. X-ray imaging is a fast, non-invasive and widely applicable method for foreign object detection. Deep learning has recently emerged as a powerful approach for recognizing patterns in radiographs (i.e., X-ray images), enabling automated X-ray-based foreign object detection. However, these methods require a large number of training examples, and manual annotation of these examples is a subjective and laborious task. In this work, we propose a Computed Tomography (CT) based method for producing training data for supervised learning of foreign object detection, with minimal labor requirements. In our approach, a few representative objects are CT scanned and reconstructed in 3D. The radiographs that are acquired as part of the CT-scan data serve as input for the machine learning method. High-quality ground truth locations of the foreign objects are obtained through accurate 3D reconstructions and segmentations. Using these segmented volumes, corresponding 2D segmentations are obtained by creating virtual projections. We outline the benefits of objectively and reproducibly generating training data in this way. In addition, we show how the accuracy depends on the number of objects used for the CT reconstructions. The results show that in this workflow generally only a relatively small number of representative objects (i.e., fewer than 10) are needed to achieve adequate detection performance in an industrial setting.
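
    The virtual-projection step can be illustrated with the simple parallel-beam sketch below, in which a segmented volume is projected along the beam axis to obtain a 2D ground-truth mask for the corresponding radiograph. The virtual_projection helper and the axis-aligned geometry are simplifying assumptions; the actual workflow would use the scanner's acquisition geometry (for example, cone-beam projection via the ASTRA toolbox).

```python
# Hedged sketch of the virtual-projection step: project a segmented volume
# along the beam direction to obtain a 2D ground-truth mask.
import numpy as np

def virtual_projection(segmented_volume, axis=0):
    """Binary 2D mask: pixels whose ray passes through any foreign-object voxel."""
    return (segmented_volume.sum(axis=axis) > 0).astype(np.uint8)

# Toy example: a labelled volume with a small "foreign object" blob.
vol = np.zeros((128, 128, 128), dtype=np.uint8)
vol[40:50, 60:70, 80:90] = 1
mask = virtual_projection(vol, axis=0)   # 2D training label for the radiograph
```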